# Dense vector encoding
## Armenian Text Embeddings 1
Author: Metric-AI · Downloads: 578 · Likes: 18
An Armenian text embedding model built on multilingual-e5-base, supporting semantic search and cross-lingual understanding.
Tags: Text Embedding, Transformers, Multilingual
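Every model in this listing works the same way at query time: texts are mapped to fixed-size dense vectors, and documents are ranked by cosine similarity to the query vector. A minimal sketch of that ranking step, using toy 4-dimensional vectors as stand-ins for the real 768-dimensional embeddings a model like this would produce:

```python
import numpy as np

def cosine_search(query_vec, doc_vecs, top_k=2):
    """Rank documents by cosine similarity to the query vector."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity per document
    order = np.argsort(-scores)[:top_k]  # best matches first
    return [(int(i), float(scores[i])) for i in order]

# Toy 4-d stand-ins for real 768-d embeddings.
docs = np.array([
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 1.0, 0.1, 0.0],
    [0.1, 0.0, 0.9, 0.2],
])
query = np.array([1.0, 0.2, 0.0, 0.0])

print(cosine_search(query, docs))  # document 0 ranks first
```

In practice the vectors would come from the model itself (e.g. via the `sentence-transformers` library) rather than being written by hand.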
## USER-base
Author: deepvk · License: Apache-2.0 · Downloads: 2,337 · Likes: 19
A sentence embedding model for Russian that maps sentences and paragraphs into a 768-dimensional dense vector space, suitable for clustering and semantic search.
Tags: Text Embedding, Other
## Mmlw Retrieval Roberta Large
Author: sdadas · License: Apache-2.0 · Downloads: 237.90k · Likes: 12
MMLW ("muszę mieć lepszą wiadomość", "I must have a better message") is a neural text encoder for Polish, optimized for information retrieval tasks.
Tags: Text Embedding, Transformers, Other
## Mmlw Retrieval E5 Large
Author: sdadas · License: Apache-2.0 · Downloads: 56 · Likes: 3
MMLW is a neural text encoder for Polish, optimized for information retrieval; it converts queries and passages into 1024-dimensional vectors.
Tags: Text Embedding, Transformers, Other
## Mmlw Retrieval E5 Small
Author: sdadas · License: Apache-2.0 · Downloads: 34 · Likes: 1
MMLW ("muszę mieć lepszą wiadomość", "I must have a better message") is a neural text encoder for Polish, optimized for information retrieval; it converts queries and passages into 384-dimensional vectors.
Tags: Text Embedding, Transformers, Other
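The E5-derived retrieval models above use asymmetric prefixes: queries are encoded with a leading `"query: "` and documents with `"passage: "`. The sketch below shows that pipeline shape; `toy_encode` is a deterministic hash-based stand-in for the real encoder (in practice you would load a model such as `sdadas/mmlw-retrieval-e5-small` with `sentence-transformers` — the model id is assumed here) so the snippet runs offline:

```python
import hashlib
import numpy as np

def toy_encode(text, dim=8):
    """Deterministic stand-in for a real sentence encoder.
    Returns a unit-norm vector so dot product equals cosine similarity."""
    h = hashlib.sha256(text.encode("utf-8")).digest()
    v = np.frombuffer(h[:dim], dtype=np.uint8).astype(np.float64)
    return v / np.linalg.norm(v)

# E5-family retrieval models expect these asymmetric prefixes.
query = toy_encode("query: jak zasilany jest laptop?")
passages = [toy_encode("passage: " + p) for p in [
    "Laptop jest zasilany z baterii lub zasilacza.",
    "Stolica Polski to Warszawa.",
]]

# Score each passage against the query by dot product of unit vectors.
scores = [float(query @ p) for p in passages]
print(scores)
```

With a real MMLW encoder the first passage would score highest; the hash-based stand-in only illustrates the prefix convention and scoring, not semantic quality.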
## Sentence BERTino V2 Mmarco 4m
Author: efederici · License: Apache-2.0 · Downloads: 16 · Likes: 2
An Italian sentence embedding model based on sentence-transformers that maps text into a 768-dimensional vector space, suitable for semantic search and clustering.
Tags: Text Embedding, Transformers, Other
## Sentence Transformers E5 Large V2
Author: embaas · Downloads: 71.83k · Likes: 10
A sentence-transformers port of the intfloat/e5-large-v2 model that maps sentences and paragraphs into a 1024-dimensional dense vector space, suitable for clustering and semantic search.
Tags: Text Embedding
## Multilingual En Uk Pl Ru
Author: uaritm · License: Apache-2.0 · Downloads: 383 · Likes: 2
A multilingual sentence-transformers model covering English, Russian, Ukrainian, and Polish, mapping sentences into a 768-dimensional vector space for tasks such as semantic similarity.
Tags: Text Embedding, Multilingual
## Roberta Base Bne Finetuned Msmarco Qa Es Mnrl Mn
Author: dariolopez · License: Apache-2.0 · Downloads: 347.38k · Likes: 5
A Spanish sentence-transformers model tuned for question-answering scenarios that maps sentences and paragraphs into a 768-dimensional vector space, suitable for semantic search and clustering.
Tags: Text Embedding, Spanish
## Ce Esci MiniLM L12 V2
Author: metarank · Downloads: 1,132 · Likes: 0
A sentence-transformers-based model that maps sentences and paragraphs into a 384-dimensional dense vector space, suitable for clustering and semantic search.
Tags: Text Embedding
## Congen Simcse Model Roberta Base Thai
Author: kornwtp · License: Apache-2.0 · Downloads: 86 · Likes: 1
A Thai sentence-similarity model based on the RoBERTa architecture that maps sentences into a 768-dimensional vector space, suitable for semantic search.
Tags: Text Embedding, Transformers
## COS TAPT N RoBERTa STS
Author: Kyleiwaniec · Downloads: 14 · Likes: 0
A sentence-transformers-based embedding model that maps text into a 1024-dimensional vector space, suitable for semantic search and text clustering.
Tags: Text Embedding, Transformers
## Distiluse Base Multilingual Cased V1
Author: DataikuNLP · License: Apache-2.0 · Downloads: 30 · Likes: 0
A multilingual sentence-transformers model that maps sentences and paragraphs into a 512-dimensional dense vector space, suitable for clustering and semantic search.
Tags: Text Embedding, Transformers